Multilingual BERT (mBERT) has demonstrated considerable cross-lingual syntactic ability, enabling effective zero-shot cross-lingual transfer of syntactic knowledge. The transfer is more successful between some languages than others, but it is not well understood what drives this variation or whether it faithfully reflects differences between the languages. In this work, we investigate the distributions of grammatical relations induced from mBERT across 24 typologically different languages. We demonstrate that the distances between the distributions of different languages are highly consistent with syntactic differences as characterized by linguistic formalisms. Such differences, learned via self-supervision, play a crucial role in zero-shot transfer performance and can be predicted by the variation in morphosyntactic properties between languages. These results suggest that mBERT encodes languages in a way consistent with linguistic diversity and provide insights into the mechanism of cross-lingual transfer.
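The abstract does not say how distances between relation distributions are measured; as one hypothetical illustration (not necessarily the authors' exact method), per-language frequency distributions over dependency relation labels could be compared with Jensen-Shannon divergence:

```python
import numpy as np
from collections import Counter

def relation_distribution(relations, label_set):
    """Normalized frequency distribution of dependency relation labels."""
    counts = Counter(relations)
    total = sum(counts.values())
    return np.array([counts.get(label, 0) / total for label in label_set])

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p, q = p + eps, q + eps
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Toy example: relation labels observed in two hypothetical treebanks.
labels = ["nsubj", "obj", "amod", "case"]
lang_a = relation_distribution(["nsubj", "obj", "obj", "amod"], labels)
lang_b = relation_distribution(["nsubj", "case", "amod", "amod"], labels)
print(js_divergence(lang_a, lang_b))
```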
Label smoothing is a regularization technique widely used in supervised learning to improve the generalization of models on various tasks, such as image classification and machine translation. However, the effectiveness of label smoothing in multi-hop question answering (MHQA) has yet to be well studied. In this paper, we systematically analyze the role of label smoothing on various modules of MHQA and propose F1 smoothing, a novel label smoothing technique specifically designed for machine reading comprehension (MRC) tasks. We evaluate our method on the HotpotQA dataset and demonstrate its superiority over several strong baselines, including models that utilize complex attention mechanisms. Our results suggest that label smoothing can be effective in MHQA, but the choice of smoothing strategy can significantly affect performance.
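The abstract does not define F1 smoothing; for context, here is a minimal PyTorch sketch of standard uniform label smoothing, the baseline technique the paper builds on (names and shapes are illustrative):

```python
import torch
import torch.nn.functional as F

def label_smoothing_loss(logits, target, epsilon=0.1):
    """Cross-entropy with standard uniform label smoothing.

    logits: (batch, num_classes); target: (batch,) gold class indices.
    """
    num_classes = logits.size(-1)
    log_probs = F.log_softmax(logits, dim=-1)
    # Smoothed target: (1 - eps) on the gold class, eps spread uniformly
    # over the remaining classes.
    smooth_target = torch.full_like(log_probs, epsilon / (num_classes - 1))
    smooth_target.scatter_(-1, target.unsqueeze(-1), 1.0 - epsilon)
    return -(smooth_target * log_probs).sum(dim=-1).mean()

logits = torch.randn(4, 5)
target = torch.tensor([0, 2, 1, 4])
print(label_smoothing_loss(logits, target))
```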
We present DiffusionBERT, a new generative masked language model based on discrete diffusion models. Diffusion models and many pre-trained language models have a shared training objective, i.e., denoising, making it possible to combine the two powerful models and enjoy the best of both worlds. On the one hand, diffusion models offer a promising training strategy that helps improve the generation quality. On the other hand, pre-trained denoising language models (e.g., BERT) can be used as a good initialization that accelerates convergence. We explore training BERT to learn the reverse process of a discrete diffusion process with an absorbing state and elucidate several designs to improve it. First, we propose a new noise schedule for the forward diffusion process that controls the degree of noise added at each step based on the information of each token. Second, we investigate several designs of incorporating the time step into BERT. Experiments on unconditional text generation demonstrate that DiffusionBERT achieves significant improvement over existing diffusion models for text (e.g., D3PM and Diffusion-LM) and previous generative masked language models in terms of perplexity and BLEU score.
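As a rough sketch of the absorbing-state forward process (assuming BERT's [MASK] token id and a uniform schedule for simplicity, whereas DiffusionBERT's proposed schedule additionally depends on per-token information), sampling x_t from x_0 might look like:

```python
import torch

MASK_ID = 103  # assumption: [MASK] id in the bert-base vocabulary

def forward_absorb(tokens, t, T):
    """Sample x_t from x_0 for an absorbing-state discrete diffusion:
    each token is independently replaced by [MASK] with probability t / T.
    (Uniform schedule shown for illustration only.)"""
    mask = torch.rand_like(tokens, dtype=torch.float) < t / T
    return torch.where(mask, torch.full_like(tokens, MASK_ID), tokens)

x0 = torch.tensor([[101, 2023, 2003, 1037, 7953, 102]])
print(forward_absorb(x0, t=50, T=100))
```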
Generating molecules that bind to specific proteins is an important but challenging task in drug discovery. Previous works usually generate atoms in an auto-regressive way, producing element types and 3D coordinates one atom at a time. However, in real-world molecular systems the interactions among atoms are global across the entire molecule, so the energy function couples pairs of atoms throughout the structure. From this energy-based perspective, the probability of a molecule should be modeled as a joint distribution rather than a product of sequential conditionals; sequential auto-regressive generation is therefore likely to violate physical rules and yield molecules with poor properties. In this work, we establish a generative diffusion model for molecular 3D structures that uses the target protein as a contextual constraint, operating at the full-atom level in a non-autoregressive way. Given a designated 3D protein binding site, our model learns a generative process that denoises both the element types and the 3D coordinates of an entire molecule with an equivariant network. Experimentally, the proposed method shows competitive performance compared with prior works in terms of high binding affinity with proteins, appropriate molecule sizes, and other drug properties such as the drug-likeness of the generated molecules.
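The contrast between sequential and joint modeling can be made explicit. Written generically (notation ours, not the paper's), for a molecule $M$ with element types $e_i$ and coordinates $x_i$ and a binding site $C$:

$$p(M \mid C) = \prod_{i=1}^{N} p(e_i, x_i \mid e_{<i}, x_{<i}, C) \quad \text{(sequential, auto-regressive)}$$

$$p(M \mid C) \propto \exp\bigl(-E(M, C)\bigr) \quad \text{(joint, energy-based)}$$

The diffusion model targets the joint view by denoising all atoms of $M$ simultaneously rather than conditioning each atom on the previously generated ones.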
Adversarial training is one of the most powerful methods to improve the robustness of pre-trained language models (PLMs). However, this approach is typically more expensive than traditional fine-tuning because of the need to generate adversarial examples with iterative gradient steps. Delving into the optimization process of adversarial training, we find that robust connectivity patterns emerge in the early training phase (typically $0.15\sim0.3$ epochs), well before the parameters converge. Inspired by this finding, we extract robust early-bird tickets (i.e., subnetworks) to develop an efficient adversarial training method: (1) searching for robust tickets with structured sparsity in the early stage; (2) fine-tuning the robust tickets for the remaining training budget. To extract the robust tickets as early as possible, we design a ticket convergence metric that automatically terminates the search. Experiments show that the proposed efficient adversarial training method achieves up to $7\times \sim 13\times$ training speedups while maintaining comparable or even better robustness than the most competitive state-of-the-art adversarial training methods.
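The abstract does not specify the ticket convergence metric; one plausible choice, following earlier early-bird ticket work, is the normalized Hamming distance between pruning masks from consecutive epochs, with the search stopping once that distance stays small. A minimal sketch (function names are ours, not the paper's):

```python
import torch

def pruning_mask(scores, sparsity):
    """Binary mask keeping the top-(1 - sparsity) fraction of |scores|."""
    k = max(1, int((1.0 - sparsity) * scores.numel()))
    threshold = torch.topk(scores.abs().flatten(), k).values.min()
    return (scores.abs() >= threshold).float()

def mask_distance(mask_a, mask_b):
    """Normalized Hamming distance between two binary masks."""
    return (mask_a != mask_b).float().mean().item()

def converged(distances, tol=0.01, patience=3):
    """Stop searching once the last few mask distances are all below tol."""
    recent = distances[-patience:]
    return len(recent) == patience and max(recent) < tol

# Usage inside the search loop (sketch):
#   distances.append(mask_distance(prev_mask, curr_mask))
#   if converged(distances): stop searching and fine-tune the ticket
```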
Recent works on the Lottery Ticket Hypothesis have shown that pre-trained language models (PLMs) contain smaller matching subnetworks (winning tickets) that can reach accuracy comparable to the original models. However, these tickets have been shown to be not robust to adversarial examples, performing even worse than their PLM counterparts. To address this problem, we propose a novel method based on learning binary weight masks to identify robust tickets hidden in the original PLMs. Since the loss is not differentiable with respect to the binary mask, we assign the hard concrete distribution to the masks and encourage their sparsity using a smoothed approximation of L0 regularization. Furthermore, we design an adversarial loss objective to guide the search for robust tickets and ensure that the tickets perform well in both accuracy and robustness. Experimental results show that the proposed method significantly improves over previous work on adversarial robustness evaluation.
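A minimal sketch of the hard concrete gate and the smoothed L0 penalty the abstract refers to (following Louizos et al., 2018; constants and names are generic, not the paper's code):

```python
import math
import torch

# Standard hard-concrete stretch limits and temperature.
GAMMA, ZETA, BETA = -0.1, 1.1, 2.0 / 3.0

def hard_concrete_gate(log_alpha):
    """Sample a (0, 1)-clipped stochastic gate per mask parameter."""
    u = torch.rand_like(log_alpha).clamp(1e-6, 1 - 1e-6)
    s = torch.sigmoid((torch.log(u) - torch.log(1 - u) + log_alpha) / BETA)
    return (s * (ZETA - GAMMA) + GAMMA).clamp(0.0, 1.0)

def expected_l0(log_alpha):
    """Smoothed L0 penalty: expected number of non-zero gates."""
    return torch.sigmoid(log_alpha - BETA * math.log(-GAMMA / ZETA)).sum()

log_alpha = torch.zeros(10, requires_grad=True)  # one learnable gate per weight group
z = hard_concrete_gate(log_alpha)                # multiply the masked weights by z
penalty = expected_l0(log_alpha)                 # add lambda * penalty to the loss
```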
Traditional training paradigms for extractive and abstractive summarization systems use only token-level or sentence-level training objectives. However, output summaries are always evaluated at the summary level, leading to an inconsistency between training and evaluation. In this paper, we propose a contrastive-learning-based re-ranking framework for one-stage summarization, called CoLo. By modeling a contrastive objective, we show that the summarization model is able to generate summaries directly according to summary-level scores, without additional modules or parameters. Extensive experiments show that CoLo improves the extractive and abstractive results of one-stage systems on the CNN/DailyMail benchmark to 44.58 and 46.33 ROUGE-1 scores, while preserving parameter efficiency and inference efficiency. Compared with state-of-the-art multi-stage systems, we save more than 100 GPU hours of training and obtain a 3~8x speed-up ratio during inference while maintaining comparable results.
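CoLo's exact objective is not given here; a generic summary-level contrastive ranking loss over candidates pre-sorted by a metric such as ROUGE might look like the following sketch (not the paper's implementation):

```python
import torch

def ranking_contrastive_loss(candidate_scores, margin=0.01):
    """Pairwise margin loss over candidate summaries.

    candidate_scores: (num_candidates,) model scores, with candidates assumed
    pre-sorted from best to worst by a summary-level metric such as ROUGE.
    Better candidates should receive higher model scores.
    """
    loss = candidate_scores.new_zeros(())
    n = candidate_scores.size(0)
    for i in range(n):
        for j in range(i + 1, n):
            # Candidate i outranks j; enforce a rank-scaled margin.
            loss = loss + torch.clamp(
                margin * (j - i) - (candidate_scores[i] - candidate_scores[j]),
                min=0.0,
            )
    return loss

scores = torch.tensor([0.8, 0.75, 0.4, 0.5], requires_grad=True)
print(ranking_contrastive_loss(scores))
```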
Event argument extraction (EAE) aims to extract arguments with given roles from text, and has been widely studied in natural language processing. Most previous works achieve good performance on specific EAE datasets with dedicated neural architectures. However, these architectures are usually difficult to adapt to new datasets or scenarios with different annotation schemas or formats. Moreover, they rely on large-scale labeled data for training, which is often unavailable due to the high labeling cost. In this paper, we propose a multi-format transfer learning model with a variational information bottleneck, which leverages the information, and in particular the common knowledge, in existing EAE datasets for new datasets. Specifically, we introduce a shared-specific prompt framework to learn both format-shared and format-specific knowledge from datasets with different formats. To further absorb the common knowledge for EAE and eliminate irrelevant noise, we integrate the variational information bottleneck into our architecture to refine the shared representation. We conduct extensive experiments on three benchmark datasets and achieve new state-of-the-art performance on EAE.
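A minimal sketch of a variational information bottleneck layer as it is commonly implemented (a reparameterized Gaussian code plus a KL penalty toward a standard normal prior); dimensions and names are illustrative, not the paper's:

```python
import torch
import torch.nn as nn

class VIBLayer(nn.Module):
    """Compress a shared representation h into a stochastic code z and
    penalize KL(q(z|h) || N(0, I)) to squeeze out irrelevant information."""

    def __init__(self, hidden_dim, code_dim):
        super().__init__()
        self.mu = nn.Linear(hidden_dim, code_dim)
        self.logvar = nn.Linear(hidden_dim, code_dim)

    def forward(self, h):
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)  # reparameterization
        kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(-1).mean()
        return z, kl  # total loss: task_loss + beta * kl

vib = VIBLayer(hidden_dim=768, code_dim=128)
z, kl = vib(torch.randn(4, 768))
```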
Multi-hop reasoning requires aggregating multiple documents to answer a complex question. Existing methods usually decompose the multi-hop question into simpler single-hop questions in order to expose an explainable reasoning process. However, they ignore grounding on the supporting facts of each reasoning step, which tends to produce inaccurate decompositions. In this paper, we propose an interpretable stepwise reasoning framework that incorporates both single-hop supporting-sentence identification and single-hop question generation at each intermediate step, and uses the inference of the current hop for the next one until the final result is reached. We employ a unified reader model for both intermediate-hop reasoning and final-hop inference, with joint optimization for more accurate and robust multi-hop reasoning. We conduct experiments on two benchmark datasets, HotpotQA and 2WikiMultiHopQA. The results show that our method effectively boosts performance and also yields a better interpretable reasoning process without decomposition supervision.
Despite great success in sentiment analysis, existing neural models struggle with implicit sentiment analysis. This may be because they tend to latch onto spurious correlations ("shortcuts", e.g., focusing only on explicit sentiment words), which undermines the effectiveness and robustness of the learned models. In this work, we propose a causal intervention model for implicit sentiment analysis using an instrumental variable (ISAIV). We first revisit sentiment analysis from a causal perspective and analyze the confounders existing in this task. Then, we introduce an instrumental variable to eliminate the confounding causal effects, thereby extracting the pure causal effect between the sentence and the sentiment. We compare the proposed ISAIV model with several strong baselines on both general implicit sentiment analysis and aspect-based implicit sentiment analysis tasks. The results indicate the great advantage of our model, as well as the efficacy of implicit sentiment reasoning.
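The instrumental-variable idea can be illustrated with a classical two-stage least squares toy example on synthetic linear data; the paper applies the instrument inside a neural sentiment model, so this is only a conceptual sketch of why an instrument removes confounding bias:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5000
z = rng.normal(size=n)                    # instrument: affects x, not y directly
confounder = rng.normal(size=n)           # unobserved confounder
x = 0.8 * z + confounder + rng.normal(size=n)
y = 1.5 * x + 2.0 * confounder + rng.normal(size=n)   # true causal effect: 1.5

# Stage 1: regress x on z; Stage 2: regress y on the fitted x_hat.
x_hat = z * (np.dot(z, x) / np.dot(z, z))
beta_iv = np.dot(x_hat, y) / np.dot(x_hat, x_hat)
beta_ols = np.dot(x, y) / np.dot(x, x)    # biased upward by the confounder
print(f"OLS estimate: {beta_ols:.2f}, IV estimate: {beta_iv:.2f}")
```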